Detection method of domains generated by dictionary-based domain generation algorithm
ZHANG Yongbin, CHANG Wenxin, SUN Lianshan, ZHANG Hang
Journal of Computer Applications    2021, 41 (9): 2609-2614.   DOI: 10.11772/j.issn.1001-9081.2020111837
The composition of domain names generated by dictionary-based Domain Generation Algorithms (DGA) is very similar to that of benign domain names, making them difficult to detect effectively with existing techniques. To solve this problem, a detection model named CL (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network) was proposed. The model consists of three parts: a character embedding layer, a feature extraction layer and a fully connected layer. Firstly, the characters of the input domain name were encoded by the character embedding layer. Then, the features of the domain name were extracted by the feature extraction layer, which connects a CNN and an LSTM in series: the n-gram features of the domain name were extracted by the CNN, and the extracted results were fed to the LSTM to learn the context between n-grams. Meanwhile, different combinations of CNNs and LSTMs were used to learn the features of n-grams of different lengths. Finally, dictionary-based DGA domain names were classified by the fully connected layer according to the extracted features. Experimental results show that the proposed model achieves the best performance when the CNNs use convolution kernel sizes of 3 and 4. In experiments on four dictionary-based DGA families, the accuracy of the CL model is 2.20% higher than that of the CNN model, and the CL model shows better stability as the number of sample families increases.
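The serial CNN-to-LSTM pipeline starts from overlapping character windows. As a minimal plain-Python illustration of what a 1-D convolution with kernel sizes 3 and 4 "sees" before the LSTM learns the context between consecutive windows (the domain label below is a hypothetical dictionary-DGA-style example, not one from the paper):

```python
def char_ngrams(domain, n):
    """All contiguous character windows of width n, i.e. the receptive
    fields that a 1-D convolution with kernel size n slides over."""
    return [domain[i:i + n] for i in range(len(domain) - n + 1)]

# Kernel sizes 3 and 4 gave the best results in the paper's experiments;
# in the CL model each branch's windows are embedded, convolved, and then
# fed to an LSTM that learns the context between consecutive n-grams.
windows = {n: char_ngrams("winterpotato", n) for n in (3, 4)}
```

This is only the windowing step; the embedding, convolution and LSTM layers would operate on these windows in the actual model.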
Design and implementation of high-interaction programmable logic controller honeypot system based on industrial control business simulation
ZHAO Guoxin, DING Ruofan, YOU Jianzhou, LYU Shichao, PENG Feng, LI Fei, SUN Limin
Journal of Computer Applications    2020, 40 (9): 2650-2656.   DOI: 10.11772/j.issn.1001-9081.2019122214
The entrapment capability of an industrial control honeypot is significantly influenced by its degree of simulation. In view of the lack of business logic simulation in existing industrial control honeypots, a high-interaction Programmable Logic Controller (PLC) honeypot design framework and implementation method based on industrial control business simulation were proposed. First, based on the interaction level of industrial control systems, a new classification method for Industrial Control System (ICS) honeypots was proposed. Then, according to the different simulation dimensions of ICS devices, the entrapment process of the honeypot was divided into a process simulation cycle and a service simulation cycle. Finally, in order to respond to business logic data in real time, process data was transferred to the service simulation cycle through a customized data transfer module. Combining the typical ICS honeypot software Conpot with the modeling and simulation tool Matlab/Simulink, experiments were carried out with a Siemens S7-300 PLC device as the reference, realizing the collaborative work of information service simulation and control process simulation. The experimental results show that, compared with Conpot, the proposed PLC honeypot system adds 11 private functions of Siemens S7 devices. In particular, the read (function code 04 Read) and write (function code 05 Write) operations among the new functions realize 7-channel monitoring of I-area data and 1-channel control of Q-area data in the PLC. The new honeypot system breaks through the limitations of existing interaction levels and methods and opens new directions for ICS honeypot design.
Multi-extended target tracking algorithm based on improved K-means++ clustering
YU Haofang, SUN Lifan, FU Zhumu
Journal of Computer Applications    2020, 40 (1): 271-277.   DOI: 10.11772/j.issn.1001-9081.2019061057
In order to solve the problems of low partition accuracy of the measurement set and high computational complexity, a Gaussian-mixture probability hypothesis density multi-extended target tracking algorithm based on an improved K-means++ clustering algorithm was proposed. Firstly, the traversal range of the K value was narrowed according to the ways the number of targets may change at the next moment. Secondly, the predicted states of the targets were used to select the initial clustering centers, providing a basis for the correct partition of the measurement set and improving the accuracy of the clustering algorithm. Finally, the proposed improved K-means++ clustering algorithm was applied to the Gaussian-mixture probability hypothesis density filter to jointly estimate the number and states of multiple targets. The simulation results show that the average tracking time of the proposed algorithm is reduced by 59.16% and 53.25% respectively, compared with that of multi-extended target tracking algorithms based on distance partition and on standard K-means++. Meanwhile, the Optimal Sub-Pattern Assignment (OSPA) error of the proposed algorithm is much lower than that of the above two algorithms. In summary, the algorithm can greatly reduce the computational complexity and achieves better tracking performance than existing measurement set partition methods.
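The key change to the clustering step, seeding with the predicted target states rather than random centers, can be sketched as follows. This is a simplified stand-alone version under stated assumptions: the real algorithm also narrows the search over K and runs inside the probability hypothesis density filter, which this sketch omits.

```python
import numpy as np

def seeded_kmeans(points, predicted_states, iters=10):
    """K-means where the initial centers are the predicted target states,
    as in the paper's improved K-means++ seeding (a sketch, not the filter)."""
    centers = np.asarray(predicted_states, dtype=float)
    for _ in range(iters):
        # assign each measurement to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center from its assigned measurements
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = points[mask].mean(axis=0)
    return labels, centers
```

Seeding from the predictions removes the randomness of standard K-means++ initialization, which is what gives the correct partition of the measurement set a head start.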
k nearest neighbor query based on parallel ant colony algorithm in obstacle space
GUO Liangmin, ZHU Ying, SUN Liping
Journal of Computer Applications    2019, 39 (3): 790-795.   DOI: 10.11772/j.issn.1001-9081.2018081647
To solve the problem of k nearest neighbor queries in obstacle space, a k nearest neighbor Query method based on an improved Parallel Ant colony algorithm (PAQ) was proposed. Firstly, ant colonies with different kinds of pheromones were utilized to search for the k nearest neighbors in parallel. Secondly, a time factor was added as a condition for judging path length, to directly reflect the searching time of the ants. Thirdly, the initial pheromone concentration was redefined to avoid blind searching by the ants. Finally, visible points were introduced to divide each obstacle path into multiple Euclidean paths; meanwhile, the heuristic function was improved so that ants select visible points for probabilistic transfer, making the ants search in a more appropriate direction and preventing the algorithm from falling into local optima prematurely. Compared with the WithGrids method, with the number of data points less than 300, the running time for line segment obstacles is reduced by about 91.5% on average, and the running time for polygonal obstacles is reduced by about 78.5% on average. The experimental results show that the running time of the proposed method has an obvious advantage on small-scale data, and the method can handle polygonal obstacles.
Density peaks clustering algorithm based on shared near neighbors similarity
BAO Shuting, SUN Liping, ZHENG Xiaoyao, GUO Liangmin
Journal of Computer Applications    2018, 38 (6): 1601-1607.   DOI: 10.11772/j.issn.1001-9081.2017122898
Density peaks clustering is an efficient density-based clustering algorithm. However, it is sensitive to the global parameter dc, and artificial intervention is needed to select clustering centers from the decision graph. To solve these problems, a new density peaks clustering algorithm based on shared-nearest-neighbor similarity was proposed. Firstly, the Euclidean distance and shared-nearest-neighbor similarity were combined to define the local density of a sample, which avoids setting the parameter dc of the original density peaks clustering algorithm. Secondly, the selection process of clustering centers was optimized to select initial clustering centers adaptively. Finally, each sample was assigned to the cluster of its nearest neighbor among samples of higher density. The experimental results show that, compared with the original density peaks clustering algorithm on UCI datasets and artificial datasets, the average accuracy, Normalized Mutual Information (NMI) and F-Measure of the proposed algorithm are increased by about 22.3%, 35.7% and 16.6% respectively. The proposed algorithm can effectively improve the accuracy of clustering and the quality of clustering results.
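The shared-nearest-neighbor similarity that replaces the dc-based density can be sketched in a few lines: two samples are similar when their k-nearest-neighbor lists overlap. This is only the similarity matrix; the paper combines it with Euclidean distance to define local density, which is not reproduced here.

```python
import numpy as np

def snn_similarity(X, k):
    """Shared-nearest-neighbor similarity: the number of k nearest
    neighbors two samples have in common (a sketch of the idea)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    # each row: indices of the k nearest neighbors, excluding the point itself
    knn = [set(np.argsort(row)[1:k + 1]) for row in d]
    n = len(X)
    S = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            S[i, j] = len(knn[i] & knn[j])
    return S
```

Because the similarity depends only on neighborhood overlap, no global cutoff distance has to be tuned.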
Simplified Slope One algorithm for online rating prediction
SUN Limei, LI Yue, Ejike Ifeanyi Michael, CAO Keyan
Journal of Computer Applications    2018, 38 (2): 497-502.   DOI: 10.11772/j.issn.1001-9081.2017082493
In the era of big data, personalized recommendation systems are an effective means of information filtering, and data sparsity is one of the main factors affecting prediction accuracy. The Slope One online rating prediction algorithm uses a simple linear regression model to address data sparsity; it is easy to implement and scores quickly, but its training stage must run offline because generating the rating differences between items consumes much time and space. To solve these problems, a simplified Slope One algorithm was proposed, which simplifies the most time-consuming procedure of Slope One, generating the item rating differences in the training stage, by using each item's historical average rating to compute the difference. The simplified algorithm reduces the time and space complexity, effectively improves the utilization of the rating data, and adapts better to sparse data. In the experiments, the rating records in the Movielens dataset were ordered by timestamp and then divided into a training set and a test set. The experimental results show that the accuracy of the simplified Slope One algorithm closely approximates that of the original Slope One algorithm, while its time and space complexity are lower; this means the simplified Slope One algorithm is more suitable for large-scale recommendation system applications with rapidly growing data.
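The simplification described above, replacing the pairwise deviation matrix with differences of per-item average ratings, can be sketched directly (a minimal sketch of the idea; the data layout is a hypothetical user-to-ratings mapping, not the paper's implementation):

```python
def simplified_slope_one(ratings, user, target):
    """Simplified Slope One: the item-item deviation dev(target, i) is
    approximated by avg(target) - avg(i), the difference of historical
    average ratings, so no O(items^2) deviation matrix is built offline.
    `ratings` maps user -> {item: rating}."""
    totals, counts = {}, {}
    for user_ratings in ratings.values():
        for item, r in user_ratings.items():
            totals[item] = totals.get(item, 0.0) + r
            counts[item] = counts.get(item, 0) + 1
    avg = {item: totals[item] / counts[item] for item in totals}
    # shift each of the user's ratings by the average-rating difference
    preds = [r + (avg[target] - avg[item])
             for item, r in ratings[user].items() if item != target]
    return sum(preds) / len(preds)
```

The per-item averages are a single pass over the data, which is why the offline training stage of the original algorithm can be dropped.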
Spectral clustering algorithm based on differential privacy protection
ZHENG Xiaoyao, CHEN Dongmei, LIU Yuqing, YOU Hao, WANG Xiangshun, SUN Liping
Journal of Computer Applications    2018, 38 (10): 2918-2922.   DOI: 10.11772/j.issn.1001-9081.2018040888
Aiming at the problem of privacy leakage in the application of traditional clustering algorithms, a spectral clustering algorithm based on differential privacy protection was proposed. Based on the differential privacy model, the cumulative distribution function was used to generate random noise satisfying the Laplace distribution. The noise was then added to the sample similarity function calculated by the spectral clustering algorithm, which perturbs the weight values between individual samples and hides information between sample individuals for privacy protection. Experimental results on UCI datasets verify that the proposed algorithm can achieve effective data clustering within a certain degree of information loss while protecting the clustered data.
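Generating Laplace noise by inverting the cumulative distribution function, as the abstract describes, is a one-liner; the scale parameter would be sensitivity / epsilon in the differential privacy model (a generic sketch, not the paper's exact code):

```python
import math
import random

def laplace_noise(scale, u=None):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling:
    F^{-1}(u) = -scale * sgn(u - 0.5) * ln(1 - 2|u - 0.5|)."""
    if u is None:
        u = random.random()   # uniform in [0, 1)
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

One such draw would be added to every entry of the pairwise similarity matrix before the spectral decomposition, perturbing the edge weights between samples.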
Research and application for terminal location management system based on firmware
SUN Liang, CHEN Xiaochun, ZHENG Shujian, LIU Ying
Journal of Computer Applications    2017, 37 (2): 417-421.   DOI: 10.11772/j.issn.1001-9081.2017.02.0417
Pasting a Radio Frequency Identification (RFID) tag on the shell of a computer to trace its location in real time has been the most frequently used method for terminal location management. However, an RFID tag loses direct control of the computer once it is out of the authorized area. Therefore, a terminal location management system based on firmware and RFID was proposed. First of all, the authorized area was delimited by the RFID radio signal, and the computer was allowed to boot only if the firmware received the authorized RFID signal at the boot stage, via interaction between the firmware and the RFID tag. Secondly, the computer could function normally only while it received the RFID signal with the operating system running. At last, the location management software Agent was protected by the firmware to prevent it from being altered or deleted. When the computer moves out of the RFID signal coverage, the situation is caught by the terminal's software Agent; the terminal is then locked and its data destroyed. The terminal location management system prototype was deployed in an office area to control about thirty computers, so that they could be used normally in authorized areas and were locked immediately once out of them.
Feature selection method of high-dimensional data based on random matrix theory
WANG Yan, YANG Jun, SUN Lingfeng, LI Yunuo, SONG Baoyan
Journal of Computer Applications    2017, 37 (12): 3467-3471.   DOI: 10.11772/j.issn.1001-9081.2017.12.3467
Traditional feature selection methods usually remove redundant features by using correlation measures, without considering the large amount of noise in a high-dimensional correlation matrix, which seriously affects the feature selection results. To solve this problem, a feature selection method based on Random Matrix Theory (RMT) was proposed. Firstly, the singular values of the correlation matrix that fit the random matrix prediction were removed, obtaining the denoised correlation matrix and the number of features to select. Then, singular value decomposition was performed on the denoised correlation matrix, and the correlation between each feature and class was obtained from the decomposed matrices. Finally, feature selection was accomplished according to the feature-class correlations and the redundancy between features. In addition, a feature selection optimization method was proposed, which further optimizes the result by treating each feature in turn as a random variable and comparing the difference between its singular value vector and the original singular value vector. The classification experimental results show that the proposed method can effectively improve the classification accuracy and reduce the size of the training data.
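The "random matrix prediction" step typically means discarding eigenvalues of the sample correlation matrix that fall inside the Marchenko-Pastur bulk expected for pure noise. A sketch of that filtering, under the assumption that the paper's criterion matches the standard Marchenko-Pastur upper edge (the paper phrases it in terms of singular values, which coincide with eigenvalues for a symmetric correlation matrix):

```python
import numpy as np

def denoise_correlation(X):
    """Zero out eigenvalues of the sample correlation matrix that lie
    inside the Marchenko-Pastur noise bulk predicted by RMT (a sketch)."""
    n, p = X.shape                      # n samples, p features
    q = p / n
    lam_max = (1 + np.sqrt(q)) ** 2     # upper edge of the random bulk
    C = np.corrcoef(X, rowvar=False)
    w, V = np.linalg.eigh(C)
    w_clean = np.where(w > lam_max, w, 0.0)   # keep only signal eigenvalues
    return V @ np.diag(w_clean) @ V.T, int((w > lam_max).sum())
```

The count of eigenvalues surviving the cut is what the paper uses as the number of features to select.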
Short-term lightning prediction based on multi-machine learning competitive strategy
SUN LiHua, YAN Junfeng, XU Jianfeng
Journal of Computer Applications    2016, 36 (9): 2555-2559.   DOI: 10.11772/j.issn.1001-9081.2016.09.2555
Traditional lightning data forecasting methods often use a single optimal machine learning algorithm, without considering the spatial and temporal variations of meteorological data. To address this, a multi-machine learning model based on ensemble learning was put forward. Firstly, attribute reduction was conducted on the meteorological data to reduce its dimensionality. Secondly, multiple heterogeneous machine learning classifiers were trained on the data set, and the optimal base classifiers were screened based on predictive quality. Thirdly, the final classifier was generated by weighted training of the optimal base classifiers using the ensemble strategy. The experimental results show that, compared with the traditional single optimal algorithm, the prediction accuracy of the proposed model is increased by 9.5% on average.
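The final combination step can be sketched as weighted voting over the screened base classifiers (a minimal sketch; the weights are assumed to come from each classifier's measured predictive quality, and any callable returning a class label stands in for a trained model):

```python
def weighted_vote(classifiers, weights, x):
    """Combine screened base classifiers by accuracy-derived weights:
    each classifier votes for a label with its weight, and the label
    with the largest total weight wins."""
    scores = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

Because the weights track per-classifier quality on recent data, the ensemble can follow the spatial and temporal drift that defeats a single fixed model.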
Camera calibration method of surgical navigation based on C-arm
ZHANG Jianfa, ZHANG Fengfeng, SUN Lining, KUANG Shaolong
Journal of Computer Applications    2016, 36 (8): 2327-2331.   DOI: 10.11772/j.issn.1001-9081.2016.08.2327
Concerning the problems of too many transitional links and a complex parameter-solving process in the camera calibration of C-arm-based surgical navigation, a new method that completely ignores the camera model was proposed. In this method, the camera model was ignored and the transitional links in the process of solving the mapping parameters were simplified, which improves efficiency. In addition, camera calibration was achieved by distinguishing the projection data from a calibration target with double-layer metal balls. In the calibration point verification experiment, the residual error of each test point was no more than 0.002 pixels; in the navigation validation experiment, probe pointing and perforation tests were successfully implemented on the established preliminary experimental platform. The experimental results verify that the proposed camera calibration method can meet the accuracy requirements of a surgical navigation system.
Retrieval method of images based on robust Cosine-Euclidean metric dimensionality reduction
HUANG Xiaodong, SUN Liang
Journal of Computer Applications    2016, 36 (8): 2292-2295.   DOI: 10.11772/j.issn.1001-9081.2016.08.2292
Focusing on the issues that Principal Component Analysis (PCA) related dimensionality reduction methods are limited in handling nonlinearly distributed datasets and have poor robustness, a new dimensionality reduction method named Robust Cosine-Euclidean Metric (RCEM) was proposed. Considering that the Cosine Metric (CM) can handle outliers efficiently and the Euclidean distance can well maintain the variance information of samples, the CM was used to describe the geometric characteristics of neighborhoods and the Euclidean distance was used to depict the global distribution of the dataset. The proposed method retains the local information of the dataset while unifying local and global structure, which increases the robustness of local dimensionality reduction and helps avoid the small-sample-size problem. The experimental results on the Corel-1000 dataset show that the average retrieval precision of RCEM is 5.61% higher than that of Angle Optimization Global Embedding (AOGE), and the retrieval time of RCEM is 42% lower than that of retrieval without dimensionality reduction. The results indicate that RCEM can improve the efficiency of image retrieval without decreasing retrieval accuracy, and it can be effectively applied to Content-Based Image Retrieval (CBIR).
Image super-resolution reconstruction based on local regression model
LI Xin, CUI Ziguan, SUN Linhui, ZHU Xiuchang
Journal of Computer Applications    2016, 36 (6): 1654-1658.   DOI: 10.11772/j.issn.1001-9081.2016.06.1654
Image Super-Resolution (SR) algorithms based on sparse reconstruction generally require external training samples, so their reconstruction quality depends on the similarity between the image to be reconstructed and the training samples. To solve this problem, an image super-resolution reconstruction algorithm based on a local regression model was proposed. Using the fact that local image structures repeat at corresponding positions across image scales, a first-order approximation of the nonlinear mapping function from low- to high-resolution image patches was built for super-resolution reconstruction. The prior model of the nonlinear mapping function was established by applying dictionary learning to the in-place example pairs of the input image and its low-frequency band image. During the reconstruction of an image patch, the non-local self-similarity of the image was exploited: the first-order regression model was applied to multiple non-local self-similar patches, and the high-resolution patch was obtained by weighted summation. The experimental results show that, compared with other super-resolution algorithms that also exploit image self-similarity, the proposed algorithm increases the average Peak Signal-to-Noise Ratio (PSNR) of the reconstructed images by 0.3-1.1 dB, and its subjective reconstruction quality is also improved significantly.
Temporal similarity algorithm of coarse-granularity based dynamic time warping
CHEN Mingwei, SUN Lihua, XU Jianfeng
Journal of Computer Applications    2016, 36 (6): 1639-1644.   DOI: 10.11772/j.issn.1001-9081.2016.06.1639
The Dynamic Time Warping (DTW) algorithm cannot keep high classification accuracy while improving computation speed. To solve this problem, a Coarse-Granularity based Dynamic Time Warping (CG-DTW) algorithm based on the idea of granular computing was proposed. First of all, suitable temporal granularities were obtained by computing temporal variance features, and the original series were replaced by granularity features. Then, the relatively optimal corresponding temporal granularity was obtained by executing DTW while dynamically adjusting the inter-granular elasticity of the compared granularities. Finally, the DTW distance was calculated at the corresponding optimal granularity. During this process, an early-termination strategy based on a lower-bound function was introduced to further improve the efficiency of CG-DTW. The experimental results show that the proposed algorithm runs about 21.4% faster than the classical algorithm and is about 32.3 percentage points more accurate than dimension-reduction strategy algorithms. Especially for long time series, CG-DTW balances high computing speed with good classification accuracy, and in practical applications it can adapt to the classification of long time series of uncertain length.
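A minimal sketch of the early-termination idea (the granulation step itself is not reproduced): since DTW path costs only accumulate, if every cell of the current dynamic-programming row already exceeds the best distance found so far, the final distance cannot beat it and the computation can be abandoned.

```python
def dtw(a, b, best_so_far=float("inf")):
    """DTW distance with row-wise early abandon against best_so_far."""
    INF = float("inf")
    n, m = len(a), len(b)
    prev = [0.0] + [INF] * m          # row 0 of the DP table
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            cur[j] = cost + min(prev[j], cur[j - 1], prev[j - 1])
        if min(cur[1:]) > best_so_far:  # no warping path can beat the best
            return INF                  # early abandon
        prev = cur
    return prev[m]
```

In a nearest-neighbor classifier, best_so_far is the distance to the best match found so far, so most candidate comparisons terminate after a few rows.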
Enterprise abbreviation prediction based on constitution pattern and conditional random field
SUN Liping, GUO Yi, TANG Wenwu, XU Yongbin
Journal of Computer Applications    2016, 36 (2): 449-454.   DOI: 10.11772/j.issn.1001-9081.2016.02.0449
With the continuous development of enterprise marketing, enterprise abbreviations have been widely used. Nevertheless, as one of the main sources of unknown words, enterprise abbreviations cannot be effectively identified. A method for predicting enterprise abbreviations based on constitution patterns and Conditional Random Field (CRF) was proposed. First, the constitution patterns of enterprise names and abbreviations were summarized from a linguistic perspective, and the Bi-gram algorithm was improved by a combination of lexicon and rules, named CBi-gram. The CBi-gram algorithm was used to realize automatic segmentation of enterprise names and improve the recognition accuracy of the company core word. Then the enterprise types were subdivided by CBi-gram, and abbreviation rule sets were collected by manual summarization and self-learning to reduce the noise caused by unsuitable rules. In addition, to make up for the limitations of manually built rules on abbreviations and mixed abbreviations, CRF was introduced to generate enterprise abbreviations statistically, with words, tones and word positions used as features for model training. The experimental results show that the method exhibits good performance and its output basically covers the usual range of enterprise abbreviations.
Application of factorization machine in mobile App recommendation based on deep packet inspection
SUN Liangjun, FAN Jianfeng, YANG Wanqi, SHI Yinhuan
Journal of Computer Applications    2016, 36 (2): 307-310.   DOI: 10.11772/j.issn.1001-9081.2016.02.0307
To extract features from Deep Packet Inspection (DPI) data and perform mobile application recommendation, using DPI data collected from the Internet Service Provider (ISP) Jiangsu Telecom, the access history data of active users defined by the communication operator was processed by matrix factorization recommendation (including Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF)), SVD recommendation and factorization machine recommendation algorithms for mobile application recommendation. The results show that the factorization machine algorithm achieves better performance, which means the factorization machine can better describe the latent connections in the user-item relationship.
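The factorization machine that wins the comparison scores a feature vector with a linear term plus factorized pairwise interactions; the standard O(k·n) reformulation avoids summing over all feature pairs explicitly. A generic sketch (not the exact model trained on the DPI data):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction:
    y = w0 + w.x + sum_{i<j} <v_i, v_j> x_i x_j,
    with the pairwise sum computed as
    0.5 * sum_f [ (V^T x)_f^2 - ((V*V)^T (x*x))_f ]."""
    linear = w0 + w @ x
    s = V.T @ x                       # shape (k,)
    s2 = (V ** 2).T @ (x ** 2)        # shape (k,)
    return linear + 0.5 * float(np.sum(s ** 2 - s2))
```

The factorized interaction term is what lets the model generalize to user-app pairs never observed together, which plain matrix factorization over the observed matrix cannot do as directly.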
Mechanism of parked domain recognition based on authoritative domain name servers
LIU Mei, ZHANG Yongbin, RAN Chongshan, SUN Lianshan
Journal of Computer Applications    2016, 36 (12): 3311-3316.   DOI: 10.11772/j.issn.1001-9081.2016.12.3311
Massive numbers of parked domains exist on the Internet, seriously affecting the browsing experience and the Internet environment of online users. In order to recognize parked domains, a new recognition method based on authoritative Domain Name Servers (DNS) was proposed. The set of authoritative DNS servers that could be used for domain parking services was extracted from the typosquatting domains commonly used by such services. The set was then clustered by a semi-supervised clustering method to identify the authoritative DNS servers associated with domain parking services. At detection time, a parked domain was recognized by judging whether its authoritative DNS server is used by a domain parking service and whether its mapped IP addresses fall within the set of IP addresses of parking Web servers. The accuracy of the proposed method was analyzed against an existing detection method based on webpage features. The experimental results show that the proposed method achieves an accuracy rate of 92.8% and avoids crawling webpage information, giving it good performance for real-time parked domain detection.
Cloud resource sharing design supporting multi-attribute range query based on SkipNet
SUN Lihua, CHEN Shipin
Journal of Computer Applications    2016, 36 (1): 72-76.   DOI: 10.11772/j.issn.1001-9081.2016.01.0072
In the cloud resource sharing service model, in order to realize multi-attribute range queries of cloud resources, an improved E-SkipNet network was proposed. Firstly, based on the traditional Distributed Hash Table (DHT) network SkipNet, data attributes were added to the setting of the NameID and physical nodes were added to single attribute domains, so that E-SkipNet supports multi-attribute range queries. Secondly, on the basis of the original E-SkipNet network, physical nodes were simultaneously mapped into multiple logical nodes added to multiple attribute domains, and resources were published according to their different attributes to different logical nodes in the improved E-SkipNet. Finally, resources were mapped to logical nodes using a uniform locality-preserving hashing function, which is the key to supporting efficient range queries. The simulation results show that the routing efficiency of the improved E-SkipNet network is increased by 18.09% and 20.47% respectively compared with E-SkipNet and the Multi-Attribute Addressable Network (MAAN). The results show that the improved E-SkipNet can support more efficient multi-attribute range queries over cloud resources and achieves load balancing in heterogeneous environments.
Multi-slot allocation data transmission algorithm based on dynamic tree topology for wireless sensor network
SUN Li, SONG Xizhong
Journal of Computer Applications    2015, 35 (10): 2858-2862.   DOI: 10.11772/j.issn.1001-9081.2015.10.2858
Concerning the load imbalance of nodes in Wireless Sensor Networks (WSN), a new multi-slot allocation data transmission algorithm based on dynamic tree topology was proposed. The data transmission mode and slot allocation were first analyzed with a tree link model. Then each node performed frame slot allocation based on its slot requirements, using the parent-child relationships in the tree topology; a sequencing mode for reception slots and a sequencing mode for transmission slots were given, allowing nodes to receive packets from other nodes in a more orderly way over less-interfered channels, reducing the waste of time slots and improving the utilization of channel slots. Compared with a life cycle extension algorithm for WSN based on data transmission optimization and a reliable data transmission algorithm based on energy awareness and time slot allocation, the simulation results show that the network energy efficiency of the proposed algorithm increases by 42.8% and 51.7% respectively, and the average lifetime of the nodes extends by 1.7% and 37.5% respectively; both energy efficiency and network life cycle are optimized.
Research on form dynamic configuration technology for industrial-chain coordination SaaS platform
LYU Rui, SUN Linfu, LIU Shuya
Journal of Computer Applications    2013, 33 (10): 2984-2988.  
In order to adapt to the dynamic changes of business requirements of collaborative enterprises during the operation of a collaborative platform, a form configuration model for a Software as a Service (SaaS) platform was established. The storage and dynamic loading of the form configuration model were supported by mapping the form structure and form elements to XML documents. A method of online dynamic allocation of operating authority over form content was presented, and online dynamic form updating was realized based on a form configuration file access interface. The proposed technology was applied to an industrial-chain SaaS platform; the application shows that flexibility is improved and enterprises gain more initiative and control over the management of their information systems.
Discrete free search algorithm
GUO Xin, SUN Lijie, LI Guangming, JIANG Kaizhong
Journal of Computer Applications    2013, 33 (06): 1563-1570.   DOI: 10.3724/SP.J.1087.2013.01563
A free search algorithm was proposed for discrete optimization problems. However, solutions obtained directly from the free search algorithm often exhibit crossover phenomena. Therefore, a free search algorithm combined with crossover elimination was put forward, which not only greatly improves the convergence rate of the search process but also enhances the quality of the results. The experimental results on Traveling Salesman Problem (TSP) standard data show that the performance of the proposed algorithm is about 1.6% better than that of the genetic algorithm.
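Crossing edges in a TSP tour are classically removed by 2-opt segment reversal; assuming the paper's "crossover elimination" is of this kind, the step can be sketched stand-alone (the free search algorithm itself is not reproduced):

```python
def two_opt(tour, dist):
    """Repeatedly replace crossing edge pairs (a,b),(c,d) with (a,c),(b,d)
    by reversing the tour segment between them, until no swap shortens
    the tour. `dist(i, j)` returns the distance between cities i and j."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:        # same edge wrapped around the tour
                    continue
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

In Euclidean instances every crossing pair of edges satisfies the swap condition, so the loop provably untangles the tour.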
Parameters optimization of combined kernel function for support vector machine
GENG Junbao, SUN Linkai, CHEN Shixue
Journal of Computer Applications    2013, 33 (05): 1321-1356.   DOI: 10.3724/SP.J.1087.2013.01321
Concerning the lack of an integrated theoretical system for selecting the parameters of combined kernel functions used in Support Vector Machines (SVM), a method based on the ant colony algorithm and circulated cross-validation was put forward to obtain the optimal parameters. An index named the mean weighted error was used to evaluate the effect of SVM prediction under different parameters; its value was calculated by circulated cross-validation. To decrease the computational workload, the ant colony algorithm was used to enhance the optimization of the combined kernel function for SVM. The method was applied to the prediction of a plan development cost, and the results show that the optimized combination of parameters had the least prediction error. The instance indicates that the proposed parameter optimization method can improve prediction precision.
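A combined kernel is typically a convex combination of base kernels; assuming an RBF-plus-polynomial form (the abstract does not specify which kernels are combined), the coupled parameters the search has to tune can be made concrete:

```python
import numpy as np

def combined_kernel(x, z, lam, gamma, degree, coef0=1.0):
    """Convex combination of an RBF and a polynomial kernel:
    K = lam * exp(-gamma * ||x - z||^2) + (1 - lam) * (x.z + coef0)^degree.
    lam, gamma and degree are the coupled parameters that ant colony
    search plus cross-validation would optimize jointly (a generic sketch)."""
    rbf = np.exp(-gamma * np.sum((x - z) ** 2))
    poly = (x @ z + coef0) ** degree
    return lam * rbf + (1.0 - lam) * poly
```

Since a convex combination of positive semi-definite kernels is itself positive semi-definite, any lam in [0, 1] yields a valid SVM kernel, which is what makes the parameter space searchable without extra validity checks.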
Reference | Related Articles | Metrics
Intuitionistic fuzzy multiple attributes decision making method based on entropy and correlation coefficient
WANG Cui-cui YAO Deng-bao MAO Jun-jun SUN Li
Journal of Computer Applications    2012, 32 (11): 3002-3017.   DOI: 10.3724/SP.J.1087.2012.03002
Abstract975)      PDF (627KB)(503)       Save
In order to deal with multiple attribute decision making problems in which the decision information is intuitionistic fuzzy and the attribute weights are unknown, a decision-making method based on intuitionistic fuzzy entropy and a score function was proposed. Firstly, a new concept of intuitionistic fuzzy entropy was presented to measure the intuitionism and fuzziness of intuitionistic fuzzy sets, and its relevant properties were discussed. Secondly, to reduce the effect of uncertain information on the decision, a programming model combined with intuitionistic fuzzy entropy was constructed to determine the attribute weights. Meanwhile, in view of the membership, non-membership and hesitancy degrees, correlation coefficients between the objects of the universe and the ideal object were constructed, and according to the decision makers' attitude, the optimal decision was obtained by defining a score function. Finally, a multiple attribute decision making method for intuitionistic fuzzy information was proposed, and its feasibility and effectiveness were verified through a case study of candidate evaluation.
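The paper defines its own entropy and correlation coefficient, which the abstract does not reproduce. As an illustration only, the sketch below uses standard textbook forms: an entropy averaging (min(mu, nu) + pi) / (max(mu, nu) + pi) over elements, and a Gerstenkorn-Manko style correlation over (membership, non-membership, hesitancy) triples:

```python
import math

def entropy(A):
    """Illustrative intuitionistic fuzzy entropy over pairs (mu, nu)
    with hesitancy pi = 1 - mu - nu; 0 = crisp, 1 = maximally fuzzy."""
    total = 0.0
    for mu, nu in A:
        pi = 1.0 - mu - nu
        total += (min(mu, nu) + pi) / (max(mu, nu) + pi)
    return total / len(A)

def correlation(A, B):
    """Correlation coefficient between two intuitionistic fuzzy sets,
    computed over (mu, nu, pi) triples."""
    def triples(S):
        return [(mu, nu, 1.0 - mu - nu) for mu, nu in S]
    ta, tb = triples(A), triples(B)
    num = sum(a0*b0 + a1*b1 + a2*b2
              for (a0, a1, a2), (b0, b1, b2) in zip(ta, tb))
    na = sum(a0*a0 + a1*a1 + a2*a2 for a0, a1, a2 in ta)
    nb = sum(b0*b0 + b1*b1 + b2*b2 for b0, b1, b2 in tb)
    return num / math.sqrt(na * nb)

ideal = [(1.0, 0.0), (1.0, 0.0)]    # hypothetical ideal object, 2 attributes
cand  = [(0.8, 0.1), (0.6, 0.3)]    # one candidate's attribute values
print(round(entropy(cand), 3), round(correlation(cand, ideal), 3))
```

Ranking candidates by their correlation with the ideal object, weighted by entropy-derived attribute weights, mirrors the decision procedure the abstract outlines.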
Related Articles | Metrics
Hadoop-based storage architecture for mass MP3 files
ZHAO Xiao-yong YANG Yang SUN Li-li CHEN Yu
Journal of Computer Applications    2012, 32 (06): 1724-1726.   DOI: 10.3724/SP.J.1087.2012.01724
Abstract1006)      PDF (431KB)(784)       Save
MP3 is the de facto standard for digital music; the number of MP3 files is huge and user access demand grows rapidly. How to store and manage massive MP3 files effectively while providing a good user experience has become a pressing concern. The emergence of Hadoop provides new ideas; however, Hadoop itself is not suitable for handling massive small files. Therefore, a Hadoop-based storage architecture for massive MP3 files was presented, which makes full use of the rich metadata of MP3 files. A classification algorithm in the pre-processing module merges small files into sequence files, and an efficient indexing mechanism is introduced; together these solve the small-file problem. The experimental results show that the approach achieves better performance.
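The core idea, merging many small files into one large container plus an index for random access, can be sketched without Hadoop. The snippet below is a plain-Python stand-in for a SequenceFile-plus-index layout, not the Hadoop API itself:

```python
import io

def pack(files):
    """Merge small files into one container blob and build an index of
    key -> (offset, length), mimicking a SequenceFile plus an index file.
    One container entry replaces many per-file NameNode entries."""
    blob = io.BytesIO()
    index = {}
    for key, data in files.items():
        index[key] = (blob.tell(), len(data))
        blob.write(data)
    return blob.getvalue(), index

def lookup(blob, index, key):
    """Random access to one packed file via the index."""
    off, length = index[key]
    return blob[off:off + length]

files = {"a.mp3": b"ID3...a", "b.mp3": b"ID3...b"}  # toy stand-ins
blob, index = pack(files)
print(lookup(blob, index, "b.mp3"))  # b'ID3...b'
```

In the paper, files are additionally grouped by MP3 metadata (the classification step) before merging, so that related files land in the same sequence file.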
Related Articles | Metrics
Authentication scheme for trusted mobile nodes in wireless network
SUN Li-na CHANG Gui-ran WANG Xing-wei
Journal of Computer Applications    2011, 31 (11): 2950-2953.   DOI: 10.3724/SP.J.1087.2011.02950
Abstract1238)      PDF (655KB)(521)       Save
The property-based platform authentication protocol, which requires no trusted third party, and the identity-based encryption protocol were applied to a node authentication scheme for wireless networks. Compared with existing methods, the proposed trusted mobile node access scheme has two major features: 1) both the mobile platform identity and the mobile user identity are verified at the same time; 2) mutual attestation is provided not only between the mobile user and the network agent, but also between mobile users. Analysis shows that the improved scheme can meet the anonymity requirement.
Related Articles | Metrics
Anti-collision algorithm for adaptive multi-branch tree based on regressive-style search
Wen-sheng SUN Ling-min HU
Journal of Computer Applications    2011, 31 (08): 2052-2055.   DOI: 10.3724/SP.J.1087.2011.02052
Abstract1152)      PDF (638KB)(886)       Save
Concerning the common problem of tag collision in Radio Frequency Identification (RFID) systems, an improved multi-branch tree anti-collision algorithm was proposed based on the regressive-style search algorithm. According to the characteristics of tag collisions, the presented algorithm adopted a dormancy counter and switched to a quad-tree structure when continuous collisions appeared, so the number of branches could be chosen dynamically during the search, which reduced the search range and improved the identification efficiency. The performance analysis shows that the system efficiency of the proposed algorithm is about 76.5%; moreover, as the number of tags increases, its performance advantage becomes more obvious.
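The quad-tree case the abstract describes can be illustrated with a plain query-tree simulation: the reader broadcasts a prefix, tags whose ID matches respond, and on a collision the prefix is extended by two bits (four branches). This sketch models only the quad-tree split, not the dormancy counter or the regressive back-off of the full algorithm:

```python
def identify(tags, bits=8):
    """Query-tree identification with quad-tree (two-bit) splits.
    Returns the identified tag IDs and the number of reader queries."""
    identified, queries = [], 0
    stack = [""]                      # pending query prefixes
    while stack:
        prefix = stack.pop()
        queries += 1
        hits = [t for t in tags if format(t, f"0{bits}b").startswith(prefix)]
        if len(hits) == 1:            # exactly one responder: identified
            identified.append(hits[0])
        elif len(hits) > 1:           # collision: extend prefix by 2 bits
            stack.extend(prefix + b for b in ("00", "01", "10", "11"))
    return identified, queries

tags = [0b00010110, 0b00010111, 0b10100001, 0b11110000]
ids, q = identify(tags)
print(sorted(ids) == sorted(tags), q)
```

The adaptive algorithm in the paper improves on this by falling back to binary splits when collisions are sparse, pruning the many empty quad branches counted in `q` above.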
Reference | Related Articles | Metrics
Constrained role-permission based delegation in pervasive computing
GAO Da-li SUN Ling XIN Yan
Journal of Computer Applications    2011, 31 (05): 1298-1301.   DOI: 10.3724/SP.J.1087.2011.01298
Abstract1164)      PDF (669KB)(885)       Save
Considering permission delegation in inter-domain access control for pervasive computing environments, a role-permission based delegation method was given based on the Role-Based Access Control (RBAC) model. The trust and time constraints were determined by the importance of the permission. The consistency between the execution model and the delegation conditions was proved. It is shown that the method can satisfy the requirements of permission delegation in pervasive computing environments, and realize the temporal constraints and the dependence on executable role sets.
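One illustrative reading of "trust and time constraints determined by the importance of the permission" is a guard that admits a delegation only when the delegatee's trust level reaches the permission's importance and the request falls inside the validity window. This is a sketch of that reading, not the paper's formal model:

```python
from datetime import datetime, timedelta

def can_delegate(importance, delegatee_trust, now, valid_from, valid_until):
    """Admit a role-permission delegation only if the delegatee is trusted
    enough for this permission's importance (both on a 0..1 scale here,
    an assumed normalization) and the temporal constraint holds."""
    return delegatee_trust >= importance and valid_from <= now <= valid_until

now = datetime(2011, 5, 1, 12, 0)
window = (now - timedelta(hours=1), now + timedelta(hours=1))
print(can_delegate(0.7, 0.9, now, *window))  # True: trusted and in window
print(can_delegate(0.7, 0.5, now, *window))  # False: trust below importance
```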
Related Articles | Metrics
Fast relay selection algorithm based on symbol error probability
SUN Lin MA She-xiang
Journal of Computer Applications    2011, 31 (03): 613-616.   DOI: 10.3724/SP.J.1087.2011.00613
Abstract1233)      PDF (570KB)(981)       Save
Concerning the trade-off between optimality and efficiency in relay selection, a fast relay selection algorithm based on symbol error probability was proposed for Amplify-and-Forward (AF) cooperative networks. First, under equal power allocation and based on the statistical channel information and the symbol error probability of the system, an equivalent channel gain was introduced; this parameter describes the composite channel characteristics of the two phases, from the source node to the relay node and from the relay node to the destination. Then, with the relays sorted in descending order of this parameter, the Signal-to-Noise Ratio (SNR) was taken as a threshold, and different relay node sets were chosen to minimize the symbol error probability under equal power allocation. Combined with near-optimal power allocation, the proposed scheme further reduced the symbol error probability. The simulation results show that the error probability performance of this relay selection scheme is close to that of the optimal full-search scheme while its complexity is reduced by at least a factor of 20. They also show that the error probability of the proposed scheme is lower than that of other schemes such as all-participating amplify-and-forward (AP-AF) relay selection and pre-selected single relay amplify-and-forward (S-AF) relay selection.
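The ranking step can be sketched with the standard AF end-to-end SNR approximation for a two-hop path (the paper's exact equivalent-gain formula may differ); relays are sorted by this metric and the best ones kept, instead of searching every relay subset:

```python
def equivalent_snr(g_sr, g_rd):
    """Equivalent end-to-end SNR of one AF relay path: a standard
    harmonic-style combination of the source-relay and
    relay-destination link SNRs (an assumed metric)."""
    return g_sr * g_rd / (g_sr + g_rd + 1.0)

def select_relays(links, k):
    """Rank relays by equivalent SNR in descending order and keep the
    best k, an O(n log n) alternative to the exhaustive subset search."""
    ranked = sorted(links, key=lambda l: equivalent_snr(*l[1:]), reverse=True)
    return [name for name, *_ in ranked[:k]]

# (relay, source->relay SNR, relay->destination SNR); hypothetical values
links = [("R1", 12.0, 3.0), ("R2", 8.0, 9.0), ("R3", 2.0, 15.0)]
print(select_relays(links, 2))  # ['R2', 'R1'] under these toy SNRs
```

Note how R1's strong first hop does not compensate for its weak second hop: the equivalent gain is dominated by the weaker link, which is exactly why a composite two-phase metric is needed.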
Related Articles | Metrics
Computer network information discovery based on information fusion
SUN Liang,LI Dong,ZHANG Tao,XIONG Yong-ping,ZOU Bai-liu
Journal of Computer Applications    2005, 25 (09): 2175-2176.   DOI: 10.3724/SP.J.1087.2005.02175
Abstract868)      PDF (197KB)(842)       Save
The available tools for detecting network information can hardly meet researchers' demands for complete and precise network information. Information fusion technology was applied to collect network information using several detecting tools, and the information from different tools was fused at different layers. At the data layer, a fuzzy logic statistical method was adopted to identify system types and network devices; at the logic layer, the most credible information was obtained with the support of a system knowledge database.
Related Articles | Metrics
Watermarking algorithm for digital image based on DWT and SVD
LIU Feng, SUN Lin-jun
Journal of Computer Applications    2005, 25 (08): 1944-1945.   DOI: 10.3724/SP.J.1087.2005.01944
Abstract1303)      PDF (161KB)(1227)       Save
A watermarking algorithm for digital images based on DWT and SVD was proposed. It embeds gray images as the watermark, which increases the information capacity of the watermark. The algorithm satisfies the transparency and robustness requirements of a watermarking system. Experiments based on this algorithm demonstrate that the watermark is robust to common signal processing operations including JPEG compression, noise, low-pass filtering, median filtering and contrast enhancement.
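A common DWT-SVD embedding (the abstract does not give the paper's exact rule) adds the watermark's singular values, scaled by a strength factor, to the singular values of the host image's LL subband. The sketch below uses a hand-rolled one-level Haar DWT so it needs only NumPy:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the LL subband and the details."""
    a = (img[0::2] + img[1::2]) / 2.0       # row averages
    d = (img[0::2] - img[1::2]) / 2.0       # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, details):
    """Inverse of haar_dwt2."""
    LH, HL, HH = details
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def embed(img, wm, alpha=0.05):
    """Embed a gray watermark into the LL singular values: S' = S + alpha*Sw
    (one common DWT-SVD scheme; alpha trades robustness for transparency)."""
    LL, details = haar_dwt2(img)
    U, S, Vt = np.linalg.svd(LL)
    Sw = np.linalg.svd(wm, compute_uv=False)
    return haar_idwt2(U @ np.diag(S + alpha * Sw) @ Vt, details)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (8, 8))   # toy host image
wm = rng.uniform(0, 255, (4, 4))    # toy gray watermark, LL-sized
marked = embed(img, wm)
print(np.abs(marked - img).mean())  # small distortion relative to the host
```

Because only the singular values change, the perturbation is spread evenly across the LL subband, which is what gives the scheme its robustness to filtering and compression.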
Related Articles | Metrics